Robustness to adversarial examples can be improved with overfitting

Authors

Abstract


Similar articles

Parseval Networks: Improving Robustness to Adversarial Examples

We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most impor...



Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

Motivated by applications such as autonomous vehicles, test-time attacks via adversarial examples have recently received a great deal of attention. In this setting, an adversary is capable of making queries to a classifier, and perturbs an example by a small amount in order to force the classifier to report an incorrect label. While a long line of work has explored a number of...


Robustness to Adversarial Examples through an Ensemble of Specialists

Due to the recent breakthroughs achieved by Convolutional Neural Networks (CNNs) for various computer vision tasks (He et al., 2015; Taigman et al., 2014; Karpathy et al., 2014), CNNs are a highly regarded technology for inclusion into real-life vision applications. However, CNNs have a high risk of failing due to adversarial examples, which fool them consistently with the addition of small pertu...


Learning with ensembles: How overfitting can be useful

We study the characteristics of learning with ensembles. Solving exactly the simple model of an ensemble of linear students, we find surprisingly rich behaviour. For learning in large ensembles, it is advantageous to use under-regularized students, which actually over-fit the training data. Globally optimal performance can be obtained by choosing the training set sizes of the students appropria...



Journal

Journal title: International Journal of Machine Learning and Cybernetics

Year: 2020

ISSN: 1868-8071,1868-808X

DOI: 10.1007/s13042-020-01097-4